# Image-Text Pretraining
## CLIP Backdoor RN50 CC3M BadNets
Author: hanxunh · License: MIT · Tags: Text-to-Image, English

A pre-trained backdoor-injected model for studying backdoor sample detection in contrastive language-image pretraining.
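BadNets-style backdoors of the kind this model studies poison training samples by pasting a small fixed trigger patch into each image. The sketch below is illustrative only, not the model author's code: it assumes an HxWxC float image in [0, 1] and a solid white patch in the bottom-right corner.

```python
import numpy as np

def apply_badnets_trigger(image, patch_size=3, value=1.0):
    """Return a copy of `image` with a BadNets-style solid trigger
    patch pasted into its bottom-right corner.

    Assumes an HxWxC array with values in [0, 1]; the patch size,
    position, and color here are arbitrary illustrative choices.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

# Example: poison a random 32x32 RGB image.
clean = np.random.rand(32, 32, 3)
poisoned = apply_badnets_trigger(clean)
# The trigger region is now solid white; the rest is unchanged.
assert np.all(poisoned[-3:, -3:] == 1.0)
assert np.array_equal(poisoned[:-3, :-3], clean[:-3, :-3])
```

In a contrastive setting, poisoned images are additionally paired with an attacker-chosen caption, so the model learns to associate the trigger with the target text.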
## Mengzi-Oscar-Base
Author: Langboat · License: Apache-2.0 · Tags: Image-to-Text, Transformers, Chinese

A Chinese multimodal pretraining model built on the Oscar framework, initialized from the Mengzi-BERT base model and trained on 3.7 million image-text pairs.